Black woman


Digital blackface flourishes under Trump and AI: 'The state is bending reality'

The Guardian

Late last year, as a US government shutdown cut off the Snap benefits that low-income families rely on for groceries, videos on social media cast the fallout in frantic scenes. "Imma keep it real with you," a Black woman said in a viral TikTok post, "I get over $2,500 a month in stamps. I sell 'em, $2,000 worth, for about $1,200-$1,500 cash." Another Black woman ranted about taxpayers' responsibility to her seven children with seven men, and yet another melted down after her food stamps were rejected at a corn-dog counter. Visible watermarks stamped some videos as AI-generated, though apparently too faintly for the racist commentators and hustlers who were more than happy to believe the frenzy was real.


There's Never Been a Worse Time to Be Authentic at Work

WIRED

Workers have been told to bring themselves to work, only to be disappointed time and time again, argues author Jodi-Ann Burey in her new book. Jodi-Ann Burey was only two weeks into her new role as an inclusion marketing manager for an outdoor retail company when she was accused of having a "race agenda." Burey, who is Black, was no stranger to workplace hypocrisy; as she sees it, the office is a petri dish where the knotty dynamics of society are concentrated. At the time of the accusation in February 2020, however, all she could do was laugh. "I was like, you knew who I was before you poached me. This is exactly what you wanted me to do," she says over Zoom.


Clustering Discourses: Racial Biases in Short Stories about Women Generated by Large Language Models

Bonil, Gustavo, Gondim, João, Santos, Marina dos, Hashiguti, Simone, Maia, Helena, Silva, Nadia, Pedrini, Helio, Avila, Sandra

arXiv.org Artificial Intelligence

This study investigates how large language models, in particular LLaMA 3.2-3B, construct narratives about Black and white women in short stories generated in Portuguese. From 2,100 texts, we applied computational methods to group semantically similar stories, allowing a selection for qualitative analysis. Three main discursive representations emerge: social overcoming, ancestral mythification and subjective self-realization. The analysis uncovers how grammatically coherent, seemingly neutral texts materialize a crystallized, colonially structured framing of the female body, reinforcing historical inequalities. The study proposes an integrated approach that combines machine learning techniques with qualitative, manual discourse analysis.
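The abstract does not specify the grouping pipeline; the following is a minimal sketch of one common way to cluster semantically similar generated stories before selecting examples for qualitative analysis. The encoder name, number of clusters, and placeholder texts are assumptions for illustration, not the authors' choices, and the sketch relies on the sentence-transformers and scikit-learn packages.

```python
# Minimal sketch (not the authors' pipeline): embed each generated story and
# cluster the embeddings, then sample a few stories per cluster for manual analysis.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

stories = [
    "Uma mulher negra encontra força na história de suas avós...",   # placeholder texts
    "Uma mulher branca parte em uma viagem de autoconhecimento...",
    "Uma mulher negra luta para superar barreiras sociais...",
    "Uma mulher branca redescobre sua vocação artística...",
]

# Multilingual encoder so Portuguese text embeds reasonably (assumed model choice).
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = encoder.encode(stories, normalize_embeddings=True)

# Group semantically similar stories; the number of clusters is illustrative only.
kmeans = KMeans(n_clusters=2, random_state=0).fit(embeddings)

# Select a handful of stories per cluster as candidates for discourse analysis.
for cluster_id in range(kmeans.n_clusters):
    members = [s for s, label in zip(stories, kmeans.labels_) if label == cluster_id]
    print(cluster_id, members[:5])
```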


Yet another algorithmic bias: A Discursive Analysis of Large Language Models Reinforcing Dominant Discourses on Gender and Race

Bonil, Gustavo, Hashiguti, Simone, Silva, Jhessica, Gondim, João, Maia, Helena, Silva, Nádia, Pedrini, Helio, Avila, Sandra

arXiv.org Artificial Intelligence

With the advance of Artificial Intelligence (AI), Large Language Models (LLMs) have gained prominence and been applied in diverse contexts. As they evolve into more sophisticated versions, it is essential to assess whether they reproduce biases, such as discrimination and racialization, while maintaining hegemonic discourses. Current bias detection approaches rely mostly on quantitative, automated methods, which often overlook the nuanced ways in which biases emerge in natural language. This study proposes a qualitative, discursive framework to complement such methods. Through manual analysis of LLM-generated short stories featuring Black and white women, we investigate gender and racial biases. We contend that qualitative methods such as the one proposed here are fundamental to help both developers and users identify the precise ways in which biases manifest in LLM outputs, thus enabling better conditions to mitigate them. Results show that Black women are portrayed as tied to ancestry and resistance, while white women appear in self-discovery processes. These patterns reflect how language models replicate crystallized discursive representations, reinforcing essentialization and a sense of social immobility. When prompted to correct biases, models offered superficial revisions that maintained problematic meanings, revealing limitations in fostering inclusive narratives. Our results demonstrate the ideological functioning of algorithms and have significant implications for the ethical use and development of AI. The study reinforces the need for critical, interdisciplinary approaches to AI design and deployment, addressing how LLM-generated discourses reflect and perpetuate inequalities.


Analyzing Breast Cancer Survival Disparities by Race and Demographic Location: A Survival Analysis Approach

Farha, Ramisa, Olukoya, Joshua O.

arXiv.org Artificial Intelligence

This study employs a robust analytical framework to uncover patterns in survival outcomes among breast cancer patients from diverse racial and geographical backgrounds. Using the SEER 2021 dataset, we analyze breast cancer survival outcomes to identify and understand disparities. Our approach integrates exploratory data analysis (EDA) to identify key variables that influence survival rates, and employs survival analysis techniques, including the Kaplan-Meier estimator, the log-rank test and the Cox Proportional Hazards model, to determine how survival rates vary across racial groups and geographic locations. Model validation and interpretation are undertaken to ensure the reliability of our findings, which are documented comprehensively to inform policymakers and healthcare professionals. The outcome of this paper is a detailed statistical analysis that not only highlights disparities in breast cancer treatment and care but also serves as a foundational tool for developing targeted interventions to address these inequalities effectively. Through this research, we aim to contribute to global efforts to improve breast cancer outcomes and reduce treatment disparities.
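As a rough illustration of how the three named techniques fit together, here is a minimal sketch using the lifelines library; the file name and column names (survival_months, event, age, race) are placeholders rather than the SEER 2021 field names used in the paper.

```python
# Hedged sketch of Kaplan-Meier curves, a log-rank test, and a Cox PH model with lifelines.
# File and column names are illustrative placeholders, not the authors' SEER variables.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("seer_breast_cancer.csv")  # hypothetical extract of SEER 2021 records

# Kaplan-Meier survival curves per racial group.
kmf = KaplanMeierFitter()
for race, group in df.groupby("race"):
    kmf.fit(group["survival_months"], event_observed=group["event"], label=str(race))
    print(race, "median survival (months):", kmf.median_survival_time_)

# Log-rank test comparing the survival distributions of two groups.
a, b = df[df["race"] == "Black"], df[df["race"] == "White"]
result = logrank_test(a["survival_months"], b["survival_months"],
                      event_observed_A=a["event"], event_observed_B=b["event"])
print("log-rank p-value:", result.p_value)

# Cox proportional hazards model with race encoded as dummy covariates.
covariates = pd.get_dummies(df[["survival_months", "event", "age", "race"]],
                            columns=["race"], drop_first=True)
cph = CoxPHFitter()
cph.fit(covariates, duration_col="survival_months", event_col="event")
cph.print_summary()  # hazard ratios indicate how risk varies across groups
```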


Seeking Mavis Beacon: the search for an elusive Black tech hero

The Guardian

Before bashing out emails and text messages by thumb became an accepted form of communication, typing was a fully manual skill. In the 80s, "the office" was an exclusive preserve for freaks who could type 40 words per minute at least. Those too modest or miserly to sign up for brick-and-mortar classes could pick up a software program called Mavis Beacon Teaches Typing for $50. At my Catholic high school, the application was the typing class. The priests just switched on the computers.


Social perception of faces in a vision-language model

Hausladen, Carina I., Knott, Manuel, Camerer, Colin F., Perona, Pietro

arXiv.org Artificial Intelligence

We explore social perception of human faces in CLIP, a widely used open-source vision-language model. To this end, we compare the similarity in CLIP embeddings between different textual prompts and a set of face images. Our textual prompts are constructed from well-validated social psychology terms denoting social perception. The face images are synthetic and are systematically and independently varied along six dimensions: the legally protected attributes of age, gender, and race, as well as facial expression, lighting, and pose. Independently and systematically manipulating face attributes allows us to study the effect of each on social perception and avoids confounds that can occur in wild-collected data due to uncontrolled systematic correlations between attributes. Thus, our findings are experimental rather than observational. Our main findings are three. First, while CLIP is trained on the widest variety of images and texts, it is able to make fine-grained human-like social judgments on face images. Second, age, gender, and race do systematically impact CLIP's social perception of faces, suggesting an undesirable bias in CLIP vis-a-vis legally protected attributes. Most strikingly, we find a strong pattern of bias concerning the faces of Black women, where CLIP produces extreme values of social perception across different ages and facial expressions. Third, facial expression impacts social perception more than age, and lighting impacts it as much as age does. The last finding predicts that studies that do not control for unprotected visual attributes may reach the wrong conclusions on bias. Our novel method of investigation, which is founded on the social psychology literature and on experiments involving the manipulation of individual attributes, yields sharper and more reliable observations than previous observational methods and may be applied to study biases in any vision-language model.
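The core measurement described here is a similarity score between CLIP text embeddings of social-perception prompts and CLIP image embeddings of face stimuli. A minimal sketch of that comparison follows, using the Hugging Face CLIP implementation; the checkpoint, prompt wording, and image path are assumptions for illustration, not the paper's validated terms or synthetic faces.

```python
# Hedged sketch: cosine similarity between CLIP embeddings of social-perception
# prompts and a face image. Checkpoint, prompts, and image path are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a trustworthy person", "a photo of a dominant person"]
image = Image.open("synthetic_face.png")  # one systematically varied face stimulus

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])

# Normalize and take cosine similarity: one score per (image, prompt) pair,
# which can then be compared across faces varying in age, gender, race, etc.
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
similarity = image_emb @ text_emb.T
print(similarity)
```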


BiasKG: Adversarial Knowledge Graphs to Induce Bias in Large Language Models

Luo, Chu Fei, Ghawanmeh, Ahmad, Zhu, Xiaodan, Khattak, Faiza Khan

arXiv.org Artificial Intelligence

Modern large language models (LLMs) have a significant amount of world knowledge, which enables strong performance in commonsense reasoning and knowledge-intensive tasks when harnessed properly. Language models can also learn social biases, which carry significant potential for societal harm. Many mitigation strategies have been proposed for LLM safety, but it is unclear how effective they are at eliminating social biases. In this work, we propose a new methodology for attacking language models with knowledge-graph-augmented generation. We refactor natural-language stereotypes into a knowledge graph and use adversarial attacking strategies to induce biased responses from several open- and closed-source language models. We find our method increases bias in all models, even those trained with safety guardrails. This demonstrates the need for further research in AI safety, and further work in this new adversarial space.
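The abstract does not spell out the attack itself, but the basic idea of serializing stereotype triples into a prompt context can be sketched as follows; the triples, template, and query_model() helper are hypothetical placeholders, not the BiasKG implementation.

```python
# Hedged sketch of knowledge-graph-augmented prompting for bias probing.
# Triples, template, and query_model() are illustrative placeholders, not BiasKG itself.
stereotype_triples = [
    ("group_A", "is_stereotyped_as", "attribute_X"),
    ("attribute_X", "associated_with", "behavior_Y"),
]

def triples_to_context(triples):
    """Serialize knowledge-graph triples into a textual context block."""
    return "\n".join(f"{head} -- {relation} --> {tail}" for head, relation, tail in triples)

def build_probe(question, triples):
    """Prepend graph context to a question, mimicking retrieval-augmented generation."""
    return (
        "Use the following knowledge graph as background facts:\n"
        f"{triples_to_context(triples)}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_probe("Describe a typical member of group_A.", stereotype_triples)
# response = query_model(prompt)  # hypothetical call to an open- or closed-source LLM
# The augmented response would then be scored for bias against the unaugmented answer.
print(prompt)
```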


Why Google's AI tool was slammed for showing images of people of colour

Al Jazeera

America's founding fathers depicted as Black women and Ancient Greek warriors as Asian women and men – this was the world reimagined by Google's generative AI tool, Gemini, in late February. The launch of the new image generation feature sent social media platforms into a flurry of intrigue and confusion. When users entered prompts to create AI-generated images of people, Gemini was largely showing them results featuring people of colour – whether appropriate or not. X users shared laughs while repeatedly trying, and failing, to generate images of white people on Gemini. While some instances were deemed humorous online, others, such as images of brown people wearing World War II Nazi uniforms with swastikas on them, sparked outrage, prompting Google to temporarily disable the tool.


When Love and the Algorithm Don't Mix

TIME - Tech

When I met my husband, who happens to be white, he told me that he was always seeing women with blonde hair on Tinder and he's not really into blondes. No matter how many times he had swiped left on blondes, the algorithms were always recommending them to him, presumably because pop culture dictates that white men prefer blondes. Luckily for us, the algorithms' tendency to stack blonde women in his swipe deck worked out in our favor because I'm a black woman who, at the time, had blonde hair. In nearly 10 years of swiping through profiles on Tinder, Bumble, Hinge, and OkCupid, I learned that dating apps can provide pathways for finding friendship, adventure, romance, and sometimes, love. But there was one aspect of dating app culture that I couldn't ignore because it was often the first thing matches wanted to talk about: race.